Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients. The ridge coefficients minimize a penalized residual sum of squares:

$$\min_{w} \|X w - y\|_2^2 + \alpha \|w\|_2^2$$

Here $\alpha \gt 0$ is a complexity parameter that controls the amount of shrinkage: the larger the value of $\alpha$, the greater the amount of shrinkage.
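To make the shrinkage concrete, here is a minimal sketch (using the same toy data as the fit examples below) that fits Ridge at several values of $\alpha$ and prints the norm of the resulting coefficient vector:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Same toy data as the fit examples below.
X = [[0, 0], [0, 0], [1, 1]]
y = [0, 0.1, 1]

# As alpha grows, the L2 penalty dominates the objective and the
# coefficient norm shrinks toward zero.
norms = [np.linalg.norm(Ridge(alpha=a).fit(X, y).coef_)
         for a in [0.01, 0.5, 10.0]]
print(norms)  # monotonically decreasing
```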
Ridge takes arrays X, y in its fit method and stores the coefficients $w$ of the linear model in its coef_ member (and the intercept in intercept_):
In [1]:
from sklearn import linear_model
clf = linear_model.Ridge(alpha=0.5)
clf.fit([[0, 0], [0, 0], [1, 1]], [0, 0.1, 1])
In [2]:
clf.coef_
In [3]:
clf.intercept_
RidgeCV implements ridge regression with built-in cross-validation of the alpha parameter, defaulting to an efficient form of leave-one-out cross-validation:

In [5]:
from sklearn import linear_model
clf = linear_model.RidgeCV(alphas=[0.1, 1.0, 10.0])
clf.fit([[0, 0], [0, 0], [1, 1]], [0, 0.1, 1])
In [6]:
clf.alpha_
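After fitting, the selected penalty is available in alpha_ and the model predicts like any other estimator. A small sketch using the same toy data (the query point `[[2, 2]]` is just an illustrative input):

```python
from sklearn.linear_model import RidgeCV

X = [[0, 0], [0, 0], [1, 1]]
y = [0, 0.1, 1]

# Fit with a candidate grid; the alpha chosen by cross-validation is
# stored in alpha_, and the fitted model can predict new points.
reg = RidgeCV(alphas=[0.1, 1.0, 10.0]).fit(X, y)
print(reg.alpha_)           # chosen from the candidate grid
print(reg.predict([[2, 2]]))
```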